To be held at IJCAI 2019 in Macao, China, on August 10-16, as part of the IJCAI competition track.
ANAC2019 will be held from 14:00 to 18:00 on August 15th in Room 2304, Level 1, Venetian Macao.
- August 6th, 2019: The ANAC2019 session program and the list of finalists were uploaded.
- July 5th, 2019: ANAC2019 will be held from 14:00 to 18:00 on August 15th in Room 2304, Level 1, Venetian Macao.
- June 9th, 2019: The deadline for submitting agents to the Human-Agent League was extended.
- May 15th, 2019: Important dates were updated.
- April 3rd, 2019: AIJ funding was accepted.
- March 7th, 2019: Calls for Participation for all leagues were uploaded.
- January 30th, 2019: Register for the mailing list: here.
- January 11th, 2019: The call for intention to participate in ANAC2019 was opened.
- January 11th, 2019: The ANAC2019 webpage was launched.
The Automated Negotiating Agent Competition (ANAC) is an international tournament that has been running since 2010 to bring together researchers from the negotiation community. ANAC provides a unique benchmark for evaluating practical negotiation strategies in multi-issue domains and has the following aims:
This year, we introduce five different negotiation research challenges:
We expect innovative and novel agent strategies to be developed for ANAC2019. After the competition, the submitted agents will be made available to the negotiation community as part of a negotiating agent repository within the aforementioned frameworks. Researchers can then develop novel negotiating agents and evaluate them by comparing their performance with that of the ANAC 2018 agents.
Thursday, August 15th 14:00 - 17:30
Room 2304, Level 1, Venetian Macao
Winner | AgentGG | Shaobo Xu and Peihao Ren | University of Southampton | UK |
2nd | KakeSoba | Ryohei Kawata | Tokyo University of Agriculture and Technology | Japan |
3rd | SAGA | Yuta Hosokawa | Tokyo University of Agriculture and Technology | Japan |
Winner | winkyAgent | Siqi Chen and Jie Lin | Tianjin University | China |
2nd | FSEGA2019 | Stancu Anca | Babeș-Bolyai University | Romania |
3rd | AgentGP | Tomoya Fukui | Nagoya Institute of Technology | Japan |
1st | Draft | Bohan Xu, Shadow Pritchard, James Hale, Sandip Sen | University of Tulsa | USA |
2nd | Dona | Eden Shalom Erez, Inon Zuckerman, Galit Haim | Ariel University and The College of Management Academic Studies | Israel |
1st | IFFM | Masanori Hirano, Taisei Mukai, Hiroyasu Matsushima, Kiyoshi Izumi | University of Tokyo | Japan |
2nd | NVM | Enrique Areyan Viqueira, Amy Greenwald | AIST and Brown University | Japan and USA |
3rd | SAHA | Nahum Alvarez | AIST | Japan |
1st | IFFM | Masanori Hirano, Taisei Mukai, Hiroyasu Matsushima, Kiyoshi Izumi | University of Tokyo | Japan |
2nd | NVM | Enrique Areyan Viqueira, Amy Greenwald | AIST and Brown University | Japan and USA |
3rd | FJ2 | Ryohei Kawata | Tokyo University of Agriculture and Technology | Japan |
Honorable Mention Award | Monopoly | Ryoto Ishikawa and Yuta Hosokawa | Tokyo University of Agriculture and Technology | Japan |
Winner | Saitama | Ryohei Kawata | Tokyo University of Agriculture and Technology | Japan |
Honorable Mention Award | Oslo_A | Liora Zaidner, Shahar Zadok, Ori Steinberg, Omry Darwish, Aviram Aviv | Bar-Ilan University | Israel |
The final results of the Werewolf league are available at:
http://aiwolf.org/en/archives/2262
The aim for the entrants to the Automated Agents league is to develop an autonomous negotiating agent that is able to negotiate with an opponent on a variety of scenarios. The participants will implement their agents in the Genius platform. Performance of the agents will be evaluated in a tournament setting, where each agent is matched with all other submitted agents.
The participants’ goal is to design winning strategies for bidding, opponent modeling and bid acceptance in a closed negotiating setting; that is, the agents do not know about their opponent’s preferences or strategy.
The main challenge in 2019 is to design a negotiating agent that is able to conduct negotiations with partial preferences (i.e., preference uncertainty). The idea behind this challenge is that in most realistic settings, a negotiating agent does not know what the user wants exactly.
Therefore, this year, the agent's preferences will be given in the form of a partial ranking of outcomes instead of a fully specified utility function. The main challenge is to estimate the agent's own utility function from a given set of outcome rankings and to negotiate well with this limited information.
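To make the challenge concrete, the sketch below shows one naive estimation approach. It is illustrative only: the bid representation (a map from issue name to chosen value) and the scoring rule are assumptions, and this is not the Genius API. It assumes a linear additive utility and scores each issue value by the average normalized position of the ranked bids that contain it.

```java
import java.util.*;

/**
 * A minimal, illustrative sketch (not the Genius API) of one naive way to
 * estimate a linear additive utility function from a partial ranking of
 * outcomes. A bid is represented as a map from issue name to chosen value;
 * each value is scored by the average normalized position of the ranked
 * bids in which it appears.
 */
public class RankingUtilityEstimator {

    /** Estimated score of each value, per issue: issue -> (value -> score in [0, 1]). */
    private final Map<String, Map<String, Double>> valueScores = new HashMap<>();

    /** @param ranking bids ordered from worst (index 0) to best (last index). */
    public void estimate(List<Map<String, String>> ranking) {
        Map<String, Map<String, List<Double>>> positions = new HashMap<>();
        int n = ranking.size();
        for (int i = 0; i < n; i++) {
            double normalizedRank = (n <= 1) ? 1.0 : (double) i / (n - 1);
            for (Map.Entry<String, String> e : ranking.get(i).entrySet()) {
                positions.computeIfAbsent(e.getKey(), k -> new HashMap<>())
                         .computeIfAbsent(e.getValue(), v -> new ArrayList<>())
                         .add(normalizedRank);
            }
        }
        // Score of a value = average normalized rank of the ranked bids containing it.
        positions.forEach((issue, values) -> {
            Map<String, Double> scores = new HashMap<>();
            values.forEach((value, ranks) -> scores.put(value,
                    ranks.stream().mapToDouble(Double::doubleValue).average().orElse(0.5)));
            valueScores.put(issue, scores);
        });
    }

    /** Estimated utility of a bid = mean score of its values; unseen values default to 0.5. */
    public double utility(Map<String, String> bid) {
        return bid.entrySet().stream()
                .mapToDouble(e -> valueScores
                        .getOrDefault(e.getKey(), Collections.emptyMap())
                        .getOrDefault(e.getValue(), 0.5))
                .average().orElse(0.0);
    }
}
```

A competitive agent would, of course, combine such an estimate with its bidding and acceptance strategies and refine it as the negotiation unfolds.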
Genius is a Java-based negotiation platform in which you can create negotiation domains and preference profiles as well as develop negotiating agents. The platform allows you to simulate negotiation sessions and run tournaments. More details can be found by following this link: http://ii.tudelft.nl/genius/
Call for Participation: Automated Agents League CFP
GENIUS platform: http://ii.tudelft.nl/genius/
The Human-Agent Negotiation (HAN) competition is conducted to explore the strategies, nuances, and difficulties in creating realistic and efficient agents whose primary purpose is to negotiate with humans. Previous work on human-agent negotiation has revealed the importance of several features not commonly present in agent-agent negotiation, including retractable and partial offers, emotion exchange, preference elicitation strategies, favors and ledgers behavior, and myriad other topics. To understand these features and better create agents that use them, this competition is designed to be a showcase for the newest work in the negotiating agent community. This year’s challenge will focus primarily on the nuances of repeated negotiations, which require more complex strategies than one-shot negotiations.
This competition is intended to foster agents that can maximize their individual utility when engaged in repeated negotiations with the same human opponent. Human participants will compete against each submitted agent in three back-to-back negotiations. One difference from ANAC2018 is that the structure of each negotiation may be different (i.e., the agent and the human participant may have different preferences in each negotiation). The reason for this change is to reward agents that can establish a cooperative relationship with the human participant, as opposed to rewarding agents that are simply good at learning the human’s preferences.
Like the 2018 challenge, these preferences will be unknown to the opposing side at the beginning of the three negotiations. In this way, agents that do a good job of learning the opponent's preferences will likely outperform agents that do not.
More fundamentally, this approach allows us to capture which agent strategies successfully account for human behavior. While an aggressive strategy in the first negotiation may prove effective, it could backfire so badly by the last negotiation that it is not the right choice overall. This year's challenge will provide insight into these and other choices when designing agents whose primary purpose is to negotiate with humans over time.
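As a purely illustrative sketch (hypothetical; not part of the IAGO API), one way an agent might carry relationship information across the three back-to-back negotiations is to adapt its concession rate according to how the previous negotiation went:

```java
/**
 * A minimal, hypothetical sketch (not part of the IAGO API) of one way to carry
 * relationship information across the three back-to-back negotiations: concede
 * faster in the next negotiation if the human behaved cooperatively in the
 * previous one, and slower otherwise.
 */
public class RepeatedNegotiationPolicy {

    private double concessionRate = 0.5;  // starting rate; tuning is up to the agent

    /**
     * Called once after each negotiation ends.
     * @param agreementReached     whether the previous negotiation ended in a deal
     * @param humanCooperativeness a score in [0, 1]; how it is measured (e.g., from
     *                             the human's concessions or emotion exchange) is an
     *                             agent design choice
     */
    public void updateAfterNegotiation(boolean agreementReached, double humanCooperativeness) {
        if (agreementReached && humanCooperativeness > 0.5) {
            concessionRate = Math.min(1.0, concessionRate + 0.1);  // reciprocate cooperation
        } else {
            concessionRate = Math.max(0.1, concessionRate - 0.1);  // protect own utility
        }
    }

    /** Target utility at normalized time t in [0, 1] within the current negotiation. */
    public double targetUtility(double t) {
        return 1.0 - concessionRate * t;  // simple time-dependent concession curve
    }
}
```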
IAGO is a platform developed by Mell and Gratch at the University of Southern California. It serves as a testbed for Human-Agent negotiation specifically. IAGO is a web-based servlet hosting system that provides data collection and recording services, a human-usable HTML5 UI, and an API for designing human-like agents.
A full documentation of IAGO is available from the download site, available at https://myiago.com. Additional rules, including submission instructions and pre-registration details, are available there as well. Please refer to the website for the most up-to-date information.
More details can be found by following this link: Human-agent League CFP
IAGO platform: http://people.ict.usc.edu/~mell/IAGO
The SCM league models a real-world scenario characterized by profit-maximizing agents that inhabit a complex, dynamic, negotiation environment. Here, agents must decide about what, with whom, and when to negotiate, as well as how to best coordinate their actions across multiple concurrent negotiations. Another distinguishing feature of the SCM league is the fact that agents’ utility functions are endogenous, meaning they are the product of the market’s evolution, and hence, cannot be dictated to agents in advance of running the simulation. It is an agent’s job to design their utilities for various possible agreements, given their unique production capabilities, and then to negotiate with other agents to contract those that are most favorable to them. Under these conditions, a major determiner of an agent’s wealth, and hence, their final score, will be their ability to position themselves well in the market and negotiate successfully.
Miner, factory manager, and consumer agents can all buy and sell products based on agreements they reach, and then sign as contracts. Such agreements are generated via bilateral negotiations using the alternating offers protocol variant typically used in ANAC competitions. The sequences of offers and counteroffers in these negotiations are private to the negotiators. An offer must specify a buyer, a seller, a product, a quantity, a delivery time, and a unit price. Optionally, an offer can also specify a grace period for converting the offer into a contract, during which time either party may opt out without penalty; and a penalty term, which is incurred by the seller only in case the contract is breached; a buyer does not usually incur this penalty because buyers are required to borrow money from the bank to fulfill their contracts rather than incur a breach. When a contract comes due, the simulator will try to execute it (i.e., move products from the seller’s inventory to the buyer’s, and move money from the buyer’s wallet to the seller’s). If this execution fails, a breach can occur. Breaches can also occur if either party decides not to honor the contract. In all cases of potential breaches, the simulator offers the agents the opportunity to renegotiate.
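For concreteness, the offer terms described above can be pictured as a plain data structure. The sketch below is illustrative only and does not reflect the actual simulator API; all field names are assumptions.

```java
/**
 * An illustrative data structure (not the actual simulator API) for the offer
 * terms described above: mandatory buyer, seller, product, quantity, delivery
 * time, and unit price, plus the optional grace period and breach penalty.
 */
public class ScmOffer {
    public final String buyerId;
    public final String sellerId;
    public final String product;
    public final int quantity;
    public final int deliveryStep;          // simulation step at which delivery is due
    public final double unitPrice;
    public final Integer gracePeriodSteps;  // optional: null if not specified
    public final Double breachPenalty;      // optional: incurred by the seller on breach

    public ScmOffer(String buyerId, String sellerId, String product, int quantity,
                    int deliveryStep, double unitPrice,
                    Integer gracePeriodSteps, Double breachPenalty) {
        this.buyerId = buyerId;
        this.sellerId = sellerId;
        this.product = product;
        this.quantity = quantity;
        this.deliveryStep = deliveryStep;
        this.unitPrice = unitPrice;
        this.gracePeriodSteps = gracePeriodSteps;
        this.breachPenalty = breachPenalty;
    }

    /** Total money the buyer must have available when the contract comes due. */
    public double totalPrice() {
        return quantity * unitPrice;
    }
}
```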
The simulation environment is under construction and will be ready soon. Python and Java will be supported by the simulation tool. All you need to do to participate in the SCM league is write and submit code for an autonomous agent that acts as a factory manager. Your agent should be robust enough to handle any manufacturing profile, because its profile will not be revealed until seconds before each simulation begins.
The organizing committee will provide full information about the production graph and the behavior of the consumers, miners, and default agents (i.e., the base factory managers). The scheduling algorithm and the utility-estimation method of the base factory managers will also be fully disclosed. Participants are free to use these components of the base factory managers, or they can develop their own.
More details can be found by following this link: Supply Chain Management League CFP
SCM platform: NegMAS
Overview: http://www.yasserm.com/scml/overview.pdf
Detailed Description: http://www.yasserm.com/scml/scml.pdf
Submission form: https://forms.gle/VsQm4z9493fqrc9L7
In the Diplomacy game league, entrants to the competition have to develop a negotiation algorithm for the game of Diplomacy. Diplomacy is a strategy game for 7 players. Each player has a number of armies and fleets positioned on a map of Europe, and the goal is to conquer half of the "Supply Centers". What makes this game very interesting and different from other board games, however, is that players need to negotiate with each other in order to play well. Players may team up and create plans together to defeat other players. Every participant in this competition must implement a negotiation algorithm using the BANDANA framework. This negotiation algorithm will then be combined with an existing non-negotiating agent (the D-Brane Strategic Module) to form a complete negotiating Diplomacy player.
More details can be found by following this link: Diplomacy Challenge CFP
BANDANA platform: http://www.iiia.csic.es/~davedejonge/bandana/
In this league, participants will implement an agent for a simulation of the social game “Werewolf”. In the Werewolf game, each player is a member of one of two communities (the villager team and the werewolf team). The objective of each community is to eliminate the members of the other community. The main mechanism for this is voting: on each in-game day, all players discuss, negotiate, and vote to remove one player from the game. Additionally, some players have secret roles that grant them special actions in the game.
Players in the villager team are the majority but do not know the team membership and roles of other players. Players in the werewolf team are in the minority, but they know the team membership (but not roles) of all players. In this way, the strategy of the werewolves is to misdirect the villagers into voting other villagers out, while the strategy of the villagers is to discover which players are villagers, and which players are werewolves.
Therefore, to successfully play the game, an agent must be able to (1) build a model of the other agents in the game, in order to estimate possible allies and deceivers, and (2) communicate strategy and information to the other agents and negotiate a voting pattern that is beneficial to its own team.
For the ANAC 2019 Werewolf League, agents will communicate using a fixed protocol that allows agents to make requests, offer information (true or false), and make arguments based on simple logic (such as “I believe Agent 3 is a werewolf because they voted for Agent 5”).
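As an illustrative sketch only (the exact protocol grammar is specified in the AIWolf documentation; the string format below is an assumption), an agent might track vote-based suspicion and turn the most suspicious player into an accusation like the example sentence above:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * A minimal, hypothetical sketch (see the AIWolf protocol documentation for the
 * real grammar) of the two capabilities above: keep a simple vote-based
 * suspicion score per player and turn the most suspicious one into an
 * accusation like "I believe Agent 3 is a werewolf because they voted for Agent 5".
 */
public class WerewolfTalkSketch {

    private final Map<Integer, Integer> suspicion = new HashMap<>(); // player id -> score
    private final Map<Integer, Integer> lastVote = new HashMap<>();  // player id -> last target

    /** Record a vote; voting against a player we trust raises the voter's suspicion. */
    public void observeVote(int voter, int target, boolean targetIsTrusted) {
        lastVote.put(voter, target);
        if (targetIsTrusted) {
            suspicion.merge(voter, 1, Integer::sum);
        }
    }

    /** Build an accusation against the most suspicious player as a protocol-style string. */
    public String buildAccusation() {
        return suspicion.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(e -> String.format(
                        "BECAUSE (VOTED Agent[%02d] Agent[%02d]) (ESTIMATE Agent[%02d] WEREWOLF)",
                        e.getKey(), lastVote.getOrDefault(e.getKey(), 0), e.getKey()))
                .orElse("Skip");  // nothing worth saying this turn
    }
}
```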
Agents can be written in Java or Python. Information about the server and client software, along with examples, details on the protocol, and more, can be found at http://aiwolf.org/en/ and https://github.com/aiwolf/.
More details can be found by following this link: Werewolf Game League CFP
Werewolf game platform: AIWolf project page
Code Samples and Howtos: https://github.com/caranha/AIWolfCompo/
Slack for contestant discussion: https://goo.gl/forms/VIXeJXvwg9YN4rHF3
The prize money will be at least $2500 in total (at least $500 per league). The prize will be shared among the top-ranked agents, i.e., the winners.
For any questions about ANAC2019, the main contact is Dr. Reyhan Aydogan <reyhan.aydogan[at]ozyegin.edu.tr>.
For ANAC2019 demonstrations, the contact is Johnathan Mell <mell[at]ict.usc.edu>